A popular approach to sequential decision making is simulator-based search with machine learning (ML) methods such as policy learning. On the other hand, when a complete declarative model is available, model-relaxation heuristics can efficiently guide the search. In this work, we consider how practitioners can improve ML-based black-box planning in settings where a complete symbolic model is not available. We show that specifying an incomplete STRIPS model that describes only part of the problem is enough to enable the use of relaxation heuristics. Our findings on several planning domains suggest that this is an effective way to improve ML-based black-box planning, as opposed to collecting more data or tuning the ML architecture.
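The abstract gives no code; as a rough illustration of the idea, here is a minimal sketch, assuming hypothetical simulator hooks (`sim.facts`, `sim.successors`), of how a relaxed-reachability heuristic computed from a partial STRIPS model could rank successors in an otherwise black-box search:

```python
# Minimal sketch (hypothetical interfaces): guiding black-box simulator search
# with a relaxation heuristic derived from a *partial* STRIPS model.
import heapq
from typing import FrozenSet, List, Tuple

# A partial STRIPS action: (preconditions, add effects). Delete effects are
# ignored by the relaxation, and the model need not cover every simulator action.
Action = Tuple[FrozenSet[str], FrozenSet[str]]

def h_relaxed(facts: FrozenSet[str], goal: FrozenSet[str], model: List[Action]) -> int:
    """h_max-style relaxed reachability: layers until the modelled goal facts
    become reachable, ignoring deletes."""
    reached, layers = set(facts), 0
    while not goal <= reached:
        new = {f for pre, add in model if pre <= reached for f in add}
        if new <= reached:       # goal unreachable under the partial model
            return 10**6         # large-but-finite penalty
        reached |= new
        layers += 1
    return layers

def greedy_search(sim, init, goal, model, max_steps=10_000):
    """Best-first search over black-box successors, ordered by h_relaxed."""
    frontier = [(h_relaxed(sim.facts(init), goal, model), 0, init)]
    seen, tie = {init}, 0
    for _ in range(max_steps):
        if not frontier:
            return None
        _, _, s = heapq.heappop(frontier)
        if goal <= sim.facts(s):
            return s
        for s2 in sim.successors(s):
            if s2 not in seen:
                seen.add(s2)
                tie += 1
                heapq.heappush(frontier, (h_relaxed(sim.facts(s2), goal, model), tie, s2))
```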
Content moderation is the process of screening and monitoring user-generated content online. It plays a crucial role in stopping content resulting from unacceptable behaviors such as hate speech, harassment, violence against specific groups, terrorism, racism, xenophobia, homophobia, or misogyny, to name a few, on online social platforms. These platforms make use of a plethora of tools to detect and manage malicious information; however, malicious actors also improve their skills, developing strategies to bypass these barriers and continue to spread misleading information. Twisting and camouflaging keywords are among the most commonly used techniques to evade platform content moderation systems. In response to this ongoing issue, this paper presents an innovative approach to address this linguistic trend in social networks through the simulation of different content evasion techniques and a multilingual Transformer model for content evasion detection. In this way, we share with the rest of the scientific community a multilingual public tool, named "pyleetspeak", which generates/simulates the phenomenon of content evasion in a customizable way through automatic word camouflage, together with a multilingual Named-Entity Recognition (NER) Transformer-based model tuned for its recognition and detection. The multilingual NER model is evaluated in different textual scenarios, detecting different types and mixtures of camouflage techniques, and achieves an overall weighted F1 score of 0.8795. This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content evasion on social networks, making the fight against information disorders more effective.
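This is not pyleetspeak's actual API; purely as an illustrative sketch of one camouflage family the paper simulates (leetspeak substitution plus word-internal punctuation injection):

```python
# Illustrative sketch only -- not pyleetspeak's real interface. It mimics one
# simple word-camouflage family: leetspeak character swaps plus punctuation injection.
import random

LEET = {"a": ["4", "@"], "e": ["3"], "i": ["1", "!"], "o": ["0"], "s": ["5", "$"]}

def camouflage(word: str, swap_prob: float = 0.6, inject_prob: float = 0.3,
               seed=None) -> str:
    rng = random.Random(seed)
    out = []
    for ch in word:
        subs = LEET.get(ch.lower())
        out.append(rng.choice(subs) if subs and rng.random() < swap_prob else ch)
        if rng.random() < inject_prob:
            out.append(rng.choice(".-_"))   # punctuation camouflage
    return "".join(out).rstrip(".-_")

print(camouflage("vaccines", seed=0))  # e.g. "v4cc.in3$" -- evades naive keyword filters
```

A detector trained only on clean keywords misses such variants, which is why the paper pairs the generator with a NER model trained to recognize camouflaged spans.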
Spacecraft pose estimation is a key task for enabling space missions in which two spacecraft must navigate around each other. Current state-of-the-art algorithms for pose estimation employ data-driven techniques. However, there is an absence of real training data for spacecraft imaged in space conditions due to the costs and difficulties associated with the space environment. This has motivated the introduction of 3D data simulators, solving the issue of data availability but introducing a large gap between the training (source) and test (target) domains. We explore a method that incorporates 3D structure into the spacecraft pose estimation pipeline to provide robustness to intensity domain shift, and we present an algorithm for unsupervised domain adaptation with robust pseudo-labelling. Our solution ranked second in both categories of the 2021 Pose Estimation Challenge organised by the European Space Agency and Stanford University, achieving the lowest average error over the two categories.
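A minimal sketch of the robust pseudo-labelling loop described, with all interfaces (`model`, `confidence`, `pose_loss`, the loaders) assumed rather than taken from the paper:

```python
# Sketch (assumed interfaces) of sim-to-real adaptation via robust pseudo-labels:
# the source-trained pose model labels real target images, low-confidence labels
# are discarded, and training continues on the mixture.
import torch

def adapt(model, source_loader, target_loader, optimizer, pose_loss,
          confidence, tau=0.8, epochs=5):
    for _ in range(epochs):
        # 1) Pseudo-label the unlabelled target domain, keeping confident samples.
        pseudo = []
        model.eval()
        with torch.no_grad():
            for imgs in target_loader:
                poses = model(imgs)
                keep = confidence(poses) > tau   # robustness: filter noisy labels
                if keep.any():
                    pseudo.append((imgs[keep], poses[keep]))
        # 2) Retrain on labelled synthetic data plus confident pseudo-labels.
        model.train()
        for (xs, ys), (xt, yt) in zip(source_loader, pseudo):
            loss = pose_loss(model(xs), ys) + pose_loss(model(xt), yt)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```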
Aliasing is a highly important concept in signal processing, as careful consideration of resolution changes is essential to ensuring the transmission and processing quality of audio, images, and video. Despite this, until recently aliasing has received little consideration in deep learning, with all common architectures carelessly sub-sampling without considering aliasing effects. In this work, we investigate the hypothesis that the existence of adversarial perturbations is due in part to aliasing in neural networks. Our ultimate goal is to increase robustness against adversarial attacks using explainable, non-trained, structural changes only, derived from aliasing first principles. Our contributions are the following. First, we establish a sufficient condition for the absence of aliasing under general image transformations. Next, we study sources of aliasing in common neural network layers and derive simple modifications from first principles to eliminate or reduce it. Lastly, our experimental results show a solid link between anti-aliasing and adversarial attacks. Simply reducing aliasing already results in more robust classifiers, and combining anti-aliasing with robust training outperforms robust training alone on $L_2$ attacks with no or minimal loss of performance on $L_{\infty}$ attacks.
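One first-principles fix in this spirit is to low-pass filter before subsampling so strided downsampling respects the sampling theorem. This follows the well-known anti-aliased "blur-pool" recipe; the kernel and wiring here are illustrative, not the paper's exact modifications:

```python
# Sketch: depthwise binomial blur before subsampling, a structural (non-trained)
# anti-aliasing change that can replace naive strided pooling in a CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        k = torch.outer(k, k)                                 # 3x3 binomial low-pass
        k = (k / k.sum()).view(1, 1, 3, 3).repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", k)
        self.stride, self.channels = stride, channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        # Depthwise convolution: blur each channel, then subsample.
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

# Drop-in replacement for a strided pool:
#   nn.MaxPool2d(2)  ->  nn.Sequential(nn.MaxPool2d(2, stride=1), BlurPool2d(channels))
```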
In this paper, we consider the problem where a drone has to collect semantic information to classify multiple moving targets. In particular, we address the challenge of computing control inputs that move the drone to informative viewpoints (position and orientation) when the information is extracted using a "black-box" classifier, e.g., a deep learning neural network. These algorithms typically lack analytical relationships between the viewpoints and their associated outputs, preventing their use in information-gathering schemes. To fill this gap, we propose a novel attention-based architecture, trained via Reinforcement Learning (RL), that outputs the next viewpoint for the drone, favoring the acquisition of evidence from as many unclassified targets as possible while reasoning about their movement, orientation, and occlusions. We then use a low-level MPC controller to move the drone to the desired viewpoint while taking into account its actual dynamics. We show that our approach not only outperforms a variety of baselines but also generalizes to scenarios unseen during training. Additionally, we show that the network scales to large numbers of targets and generalizes well to different movement dynamics of the targets.
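A toy sketch of the attention step, with the architecture and feature layout assumed rather than taken from the paper: per-target features are embedded, a learned query attends over them, and the head regresses the next viewpoint that the low-level MPC controller then tracks.

```python
# Sketch (assumed architecture): attention over per-target states -> next viewpoint.
import torch
import torch.nn as nn

class ViewpointPolicy(nn.Module):
    def __init__(self, target_dim=7, d=64):
        super().__init__()
        self.embed = nn.Linear(target_dim, d)      # per-target features, e.g.
        self.query = nn.Parameter(torch.randn(1, 1, d))  # position/heading/evidence
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, 4)                # next viewpoint: x, y, z, yaw

    def forward(self, targets, mask=None):
        # targets: (B, N, target_dim); mask can flag already-classified targets
        # so attention concentrates evidence-gathering on unclassified ones.
        h = self.embed(targets)
        q = self.query.expand(h.size(0), -1, -1)
        out, _ = self.attn(q, h, h, key_padding_mask=mask)
        return self.head(out.squeeze(1))

policy = ViewpointPolicy()
viewpoint = policy(torch.randn(2, 5, 7))   # batch of 2 scenes, 5 targets each
```

Because attention pools over a set, the same network handles any number of targets, which is consistent with the scaling claim above.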
Age-related macular degeneration (AMD) is a degenerative disorder affecting the macula, a key area of the retina for visual acuity. Nowadays, it is the most frequent cause of blindness in developed countries. Although some promising treatments have been developed, their effectiveness is low in advanced stages. This emphasizes the importance of large-scale screening programs. Nevertheless, implementing such programs for AMD is usually unfeasible, since the population at risk is large and the diagnosis is challenging. All this motivates the development of automatic methods. In this sense, several works have achieved positive results for AMD diagnosis using convolutional neural networks (CNNs). However, none incorporates explainability mechanisms, which limits their use in clinical practice. In that regard, we propose an explainable deep learning approach for the diagnosis of AMD via the joint identification of its associated retinal lesions. In our proposal, a CNN is trained end-to-end for the joint task using image-level labels. The provided lesion information is of clinical interest, as it allows clinicians to assess the developmental stage of AMD. Additionally, the approach allows the diagnosis to be explained from the identified lesions. This is possible thanks to the use of a CNN with a custom setting that links the lesions and the diagnosis. Furthermore, the proposed setting also allows coarse lesion segmentation maps to be obtained in a weakly-supervised way, further improving explainability. The training data for the approach can be obtained without much extra work by clinicians. The experiments conducted demonstrate that our approach can identify AMD and its associated lesions satisfactorily, while providing adequate coarse segmentation maps for the most common lesions.
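A sketch of the kind of "custom setting" described, with the exact design and lesion list assumed: the network predicts per-lesion logits, and the AMD logit is derived from them, so every positive diagnosis is explained by at least one identified lesion.

```python
# Sketch (assumed design): diagnosis structurally tied to lesion evidence.
import torch
import torch.nn as nn

LESIONS = ["drusen", "pigmentary_changes", "exudates", "hemorrhage", "scar"]  # illustrative

class LesionDiagnosisNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone                      # any image encoder
        self.lesion_head = nn.Linear(feat_dim, len(LESIONS))

    def forward(self, x):
        lesion_logits = self.lesion_head(self.backbone(x))
        # AMD evidence = strongest lesion evidence, so the diagnosis cannot
        # fire without a supporting lesion prediction.
        amd_logit = lesion_logits.max(dim=1).values
        return amd_logit, lesion_logits               # both supervised image-level
```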
Learning image representations using synthetic data allows training neural networks without some of the concerns associated with real images, such as privacy and bias. Existing work focuses on a handful of curated generative processes which require expert knowledge to design, making it hard to scale up. To overcome this, we propose training with a large dataset of twenty-one thousand programs, each one generating a diverse set of synthetic images. These programs are short code snippets, which are easy to modify and fast to execute using OpenGL. The proposed dataset can be used for both supervised and unsupervised representation learning, and reduces the gap between pre-training with real and procedurally generated images by 38%.
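The dataset's programs are short OpenGL snippets; as a dependency-free stand-in, here is what one such "program" amounts to: a few lines of parameterised code rendering a diverse image family (sinusoidal interference patterns in this illustrative example).

```python
# Illustrative stand-in for one short image-generating program (not from the dataset).
import numpy as np

def render(seed: int, size: int = 224) -> np.ndarray:
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size] / size
    img = np.zeros((size, size, 3))
    for c in range(3):                        # one random wave mixture per channel
        fx, fy, phase = rng.uniform(1, 20, size=3)
        img[..., c] = np.sin(2 * np.pi * (fx * x + fy * y) + phase)
    return ((img + 1) / 2 * 255).astype(np.uint8)

# Each seed yields a distinct synthetic image family for pre-training.
images = [render(seed) for seed in range(8)]
```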
Modern retrieval systems often require recomputing the representation of every piece of data in the gallery when updating to a better representation model. This process is known as backfilling and can be especially costly in the real world, where the gallery often contains billions of samples. Recently, researchers have proposed the idea of Backward Compatible Training (BCT), where the new representation model is trained with an auxiliary loss to make it backward compatible with the old representation. In this way, the new representation can be directly compared with the old representation, in principle avoiding the need for any backfilling. However, follow-up work shows that there is an inherent tradeoff: a backward-compatible representation model cannot simultaneously maintain the performance of the new model itself. This paper reports our ``not-so-surprising'' finding that adding extra dimensions to the representation can help here. However, we also found that naively increasing the dimension of the representation does not work. To deal with this, we propose Backward-compatible Training with a novel Basis Transformation ($BT^2$). A basis transformation (BT) is a learnable set of parameters that applies an orthonormal transformation. Such a transformation possesses an important property whereby the original information contained in its input is retained in its output. We show in this paper how a BT can be utilized to add only the necessary amount of additional dimensions. We empirically verify the advantage of $BT^2$ over other state-of-the-art methods in a wide range of settings. We then further extend $BT^2$ to other challenging yet more practical settings, including a significant change in model architecture (CNN to Transformers), a modality change, and even a series of updates to the model architecture mimicking the evolution of deep learning models.
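A minimal sketch of the core ingredient as described, with the wiring assumed: pad the old d-dimensional embedding with extra dimensions, then apply a learnable orthonormal transformation, which preserves the information (and distances) of its input while mixing in the added capacity.

```python
# Sketch (assumed wiring): learnable orthonormal basis transformation over
# the old embedding padded with extra dimensions.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class BasisTransform(nn.Module):
    def __init__(self, old_dim: int, extra_dim: int):
        super().__init__()
        n = old_dim + extra_dim
        # Parametrised so the linear layer's weight is always orthonormal.
        self.rot = orthogonal(nn.Linear(n, n, bias=False))
        self.extra_dim = extra_dim

    def forward(self, old_emb: torch.Tensor) -> torch.Tensor:
        pad = old_emb.new_zeros(old_emb.size(0), self.extra_dim)
        return self.rot(torch.cat([old_emb, pad], dim=1))

bt = BasisTransform(old_dim=128, extra_dim=32)
z = bt(torch.randn(4, 128))    # (4, 160); norms of the inputs are preserved
```

The orthonormality is what distinguishes this from naively widening the representation: no information in the original coordinates can be destroyed by the transformation.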
We introduce a new approach to image forensics: placing physical refractive objects, which we call totems, into a scene in order to protect any photograph taken of that scene. Totems bend and redirect light rays, thus providing multiple (albeit distorted) views of the scene within a single image. A defender can use these distorted totem pixels to detect whether an image has been manipulated. Our approach unscrambles the light rays passing through the totems by estimating their positions in the scene and using their known geometric and material properties. To verify a totem-protected image, we detect inconsistencies between the scene reconstructed from the totem viewpoints and the scene's appearance from the camera viewpoint. Such an approach makes the adversarial manipulation task more difficult, as the adversary must modify both the totem and the image pixels in a geometrically consistent manner without knowing the physical properties of the totem. Unlike prior learning-based approaches, ours does not require training on datasets of specific manipulations, and instead uses the physical properties of the scene and camera to solve the forensics problem.
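A conceptual sketch of the verification step only, with every function a placeholder for the physics described above: compare the scene reconstructed from unscrambled totem pixels against the camera's view, and flag patches where the two disagree.

```python
# Conceptual sketch (all callables are placeholders, not the paper's method):
# patch-wise inconsistency check between the totem-based reconstruction and
# the camera image.
import numpy as np

def verify(image, totem_views, reconstruct, project, patch=32, tau=0.1):
    """`reconstruct(totem_views)` -> scene estimate from the refracted rays;
    `project(scene)` -> that estimate re-rendered from the camera pose."""
    predicted = project(reconstruct(totem_views))
    h, w = image.shape[:2]
    suspicious = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            a = image[i:i + patch, j:j + patch].astype(float)
            b = predicted[i:i + patch, j:j + patch].astype(float)
            if np.mean((a - b) ** 2) > tau:   # inconsistency => likely manipulation
                suspicious.append((i, j))
    return suspicious                          # empty list => image verified
```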
The study of the retinal vasculature is a fundamental stage in the screening and diagnosis of many diseases. A complete retinal vascular analysis requires the segmentation of the vasculature into arteries and veins (A/V). Early automatic approaches addressed these segmentation and classification tasks in two sequential stages. Nowadays, however, they are treated as a joint semantic segmentation task, since the classification results greatly depend on the effectiveness of the vessel segmentation. In that regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal A/V from eye fundus images. In particular, we propose a novel method that, unlike previous approaches and thanks to a novel loss, decomposes the joint task into three segmentation problems targeting arteries, veins, and the whole vascular tree. This configuration allows vessel crossings to be handled intuitively and directly provides accurate segmentation masks for the different target vascular trees. The ablation study provided on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method offers satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method achieves highly competitive results in A/V classification while significantly improving vascular segmentation. The proposed multi-segmentation method allows more vessels to be detected and the different structures to be better segmented, while achieving competitive classification performance. In these terms, our approach also outperforms those of various reference works. Moreover, in contrast with previous approaches, the proposed method allows vessel crossings to be detected directly and preserves the continuity of A/V at these complex locations.
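A sketch of the decomposition described, with the loss weighting assumed: one network emits three sigmoid maps (arteries, veins, whole tree), each supervised independently, so a crossing pixel can be positive in both the artery and vein maps at once.

```python
# Sketch (assumed equal weighting): three-way segmentation loss for A/V analysis.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def multi_segmentation_loss(logits: torch.Tensor, artery: torch.Tensor,
                            vein: torch.Tensor) -> torch.Tensor:
    """logits: (B, 3, H, W) -- channels are artery, vein, whole vascular tree."""
    tree = torch.clamp(artery + vein, max=1.0)   # union of both trees
    return (bce(logits[:, 0], artery)
            + bce(logits[:, 1], vein)
            + bce(logits[:, 2], tree))
```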